Efficient k-Winner-Take-All Competitive Learning Hardware Architecture for On-Chip Learning
Authors
Abstract
A novel k-winners-take-all (k-WTA) competitive learning (CL) hardware architecture is presented for on-chip learning in this paper. The architecture is based on an efficient pipeline that allows the k-WTA competition processes associated with different training vectors to be performed concurrently. The pipeline employs a novel codeword swapping scheme so that neurons failing the competition for one training vector are immediately available for the competitions for subsequent training vectors. The architecture is implemented on a field programmable gate array (FPGA) and used as a hardware accelerator in a system on programmable chip (SOPC) for real-time on-chip learning. Experimental results show that the SOPC has significantly lower training time than other k-WTA CL counterparts operating with or without hardware support.
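The basic training loop the hardware accelerates can be sketched in software. The following is a minimal sketch of k-WTA competitive learning, assuming Euclidean-distance competition and a simple learning-rate update; the function name, parameters, and update rule are illustrative assumptions, not the paper's exact scheme (which additionally pipelines the competitions and swaps losing codewords across training vectors):

```python
import numpy as np

def kwta_cl_epoch(codebook, data, k=2, lr=0.1):
    """One epoch of k-winners-take-all competitive learning (software sketch).

    For each training vector, the k codewords nearest in Euclidean distance
    "win" and are moved toward the vector; losing neurons are left unchanged,
    so they remain free to compete for subsequent training vectors.
    """
    for x in data:
        d = np.linalg.norm(codebook - x, axis=1)  # distance to every neuron
        winners = np.argsort(d)[:k]               # indices of the k winners
        codebook[winners] += lr * (x - codebook[winners])
    return codebook
```

In the paper's pipelined hardware, the per-vector competition above is not serialized: competitions for several training vectors proceed concurrently, which is what the codeword swapping scheme makes safe.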
Similar Resources
A Learning Rule for Universal Approximators with a Single Non-Linearity
A learning algorithm is presented for circuits consisting of a single soft winner-take-all or k-winner-take-all gate applied to linear sums. We show that for these circuits, gradient descent with respect to a suitable error function does not run into the familiar credit-assignment problem. Furthermore, in contrast to backprop for multi-layer perceptrons, this learning algorithm does not require th...
An Efficient Hardware Circuit for Spike Sorting Based on Competitive Learning Networks
This study aims to present an effective VLSI circuit for multi-channel spike sorting. The circuit supports the spike detection, feature extraction and classification operations. The detection circuit is implemented in accordance with the nonlinear energy operator algorithm. Both the peak detection and area computation operations are adopted for the realization of the hardware architecture for f...
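The nonlinear energy operator (NEO) mentioned above is a standard transform for spike detection: psi[n] = x[n]^2 - x[n-1]*x[n+1], which is large only where the signal has both high amplitude and high frequency. A minimal sketch follows; the thresholding rule (a constant times the mean NEO output) is a common convention, and the constant `c` is an assumption here, not a value from the cited paper:

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]  # endpoints left at zero
    return psi

def detect_spikes(x, c=8.0):
    """Return indices where the NEO output exceeds c times its mean
    (a common thresholding heuristic; c is an illustrative choice)."""
    psi = neo(x)
    return np.flatnonzero(psi > c * psi.mean())
```

On a quiet signal with a single sharp transient, only the sample at the transient exceeds the threshold, which is why NEO is well suited to compact hardware realization.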
A Technique for Mapping Optimization Solutions into Hardware
Feedback neural architectures have addressed a myriad of optimization problems—almost exclusively in slow software simulation. Speed is promised only when implemented in high-speed recurrent hardware. Unfortunately, hardware’s discrepancies from ideal conspire to produce poor solutions. This paper describes a recurrent hardware-in-the-loop learning procedure that maps idealized solutions of opt...
Object selection based on oscillatory correlation
One of the classical topics in neural networks is winner-take-all (WTA), which has been widely used in unsupervised (competitive) learning, cortical processing, and attentional control. Owing to global connectivity, WTA networks, however, do not encode spatial relations in the input, and thus cannot support sensory and perceptual processing where spatial relations are important. We propose a ne...